Search Results: "dave"

28 March 2017

Keith Packard: DRM-lease

DRM display resource leasing (kernel side) So, you've got a fine head-mounted display and want to explore the delights of virtual reality. Right now, on Linux, that means getting the window system to cooperate because the window system is the DRM master and holds sole access to all display resources. So, you plug in your device, play with RandR to get it displaying bits from the window system and then carefully configure your VR application to use the whole monitor area and hope that the desktop will actually grant you the boon of page flipping so that you will get reasonable performance and maybe not even experience tearing. Results so far have been mixed, and depend on a lot of pieces working in ways that aren't exactly how they were designed to work. We could just hack up the window system(s) and try to let applications reserve the HMD monitors and somehow remove them from the normal display area so that other applications don't randomly pop up in the middle of the screen. That would probably work, and would take advantage of much of the existing window system infrastructure for setting video modes and performing page flips. However, we've got a pretty spiffy standard API in the kernel for both of those, and getting the window system entirely out of the way seems like something worth trying. I spent a few hours in Hobart chatting with Dave Airlie during LCA and discussed how this might actually work. Goals
  1. Use KMS interfaces directly from the VR application to drive presentation to the HMD.
  2. Make sure the window system clients never see the HMD as a connected monitor.
  3. Maybe let logind (or other service) manage the KMS resources and hand them out to the window system and VR applications.
Limitations
  1. Don't make KMS resources appear and disappear. It turns out applications get confused when the set of available CRTCs, connectors and encoders changes at runtime.
An Outline for Multiple DRM masters By the end of our meeting in Hobart, Dave had sketched out a fairly simple set of ideas with me. We'd add support in the kernel to create additional DRM masters. Then, we'd make it possible to 'hide' enough state about the various DRM resources so that each DRM master would automagically use disjoint subsets of resources. In particular, we would:
  1. Pretend that connectors were always disconnected
  2. Mask off CRTC and encoder bits so that some of them just didn't seem very useful.
  3. Block access to resources controlled by other DRM masters, just in case someone tried to do the wrong thing.
Refinement with Eric over Swedish Pancakes A couple of weeks ago, Eric Anholt and I had breakfast at the original pancake house and chatted a bit about this stuff. He suggested that the right interface for controlling these new DRM masters was through the existing DRM master interface, and that we could add new ioctls that the current DRM master could invoke to create and manage them. Leasing as a Model I spent some time just thinking about how this might work and came up with a pretty simple metaphor for these new DRM masters. The original DRM master on each VT "owns" the output resources and has final say over their use. However, a DRM master can create another DRM master and "lease" resources it has control over to the new DRM master. Once leased, resources cannot be controlled by the owner unless the owner cancels the lease, or the new DRM master is closed. Here's some terminology:
DRM Master
Any DRM file which can perform mode setting.
Owner
The original DRM Master, created by opening /dev/dri/card*
Lessor
A DRM master which has leased out resources to one or more other DRM masters.
Lessee
A DRM master which controls resources leased from another DRM master. Each Lessee leases resources from a single Lessor.
Lessee ID
An integer which uniquely identifies a lessee within the tree of DRM masters descending from a single Owner.
Lease
The contract between the Lessor and Lessee which identifies which resources may be controlled by the Lessee. All of the resources must be owned by or leased to the Lessor.
With Eric's input, the interface to create a lease was pretty simple to write down:
int drmModeCreateLease(int fd,
               const uint32_t *objects,
               int num_objects,
               int flags,
               uint32_t *lessee_id);
Given an FD to a DRM master, and a list of objects to lease, a new DRM master FD is returned that holds a lease to those objects. 'flags' can be any combination of O_CLOEXEC and O_NONBLOCK for the newly minted file descriptor. Of course, the owner might want to take some resources back, or even grant new resources to the lessee. So, I added an interface that rewrites the terms of the lease with a new set of objects:
int drmModeChangeLease(int fd,
               uint32_t lessee_id,
               const uint32_t *objects,
               int num_objects);
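To make this concrete, here is a minimal, hypothetical sketch of how a lessor (say, the X server holding the master FD) might lease an HMD's connector and CRTC to a VR client and later shrink the lease, assuming only the two calls proposed above; the object IDs, function name, and error handling are placeholders, not part of the original proposal:

/* Hypothetical sketch: lease an HMD's connector and CRTC, then later
 * rewrite the lease terms. Object IDs are placeholders. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int lease_hmd(int master_fd, uint32_t hmd_connector, uint32_t hmd_crtc)
{
    uint32_t objects[2] = { hmd_connector, hmd_crtc };
    uint32_t lessee_id = 0;

    /* The returned fd is a new DRM master restricted to the listed
     * objects; it would be handed to the VR compositor for modesetting
     * and page flips. */
    int lease_fd = drmModeCreateLease(master_fd, objects, 2,
                                      O_CLOEXEC, &lessee_id);
    if (lease_fd < 0) {
        fprintf(stderr, "drmModeCreateLease failed: %d\n", lease_fd);
        return -1;
    }

    /* Later, the lessor could rewrite the lease, e.g. keeping only the
     * connector leased and taking the CRTC back. */
    int ret = drmModeChangeLease(master_fd, lessee_id, &hmd_connector, 1);
    if (ret < 0)
        fprintf(stderr, "drmModeChangeLease failed: %d\n", ret);

    return lease_fd;
}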
Note that nothing here makes any promises about the state of the objects across changes in the lease status; the lessor and lessee are expected to perform whatever modesetting is required for the objects to be useful to them. Window System Integration There are two ways to integrate DRM leases into the window system environment:
  1. Have logind "lease" most resources to the window system. When an HMD is connected, it would lease out suitable resources to the VR environment.
  2. Have the window system "own" all of the resources and then add window system interfaces to create new DRM masters leased from its DRM master.
I'll probably go ahead and do 2. in X and see what that looks like. One trick with any of this will be to hide HMDs from any RandR clients listening in on the window system. You probably don't want the window system to tell the desktop that a new monitor has been connected, have it start reconfiguring things, and then have your VR application create a new DRM master, making the HMD appear to have disconnected to the window system and have that go reconfigure things all over again. I'm not sure how this might work, but perhaps having the VR application register something like a passive grab on hot plug events might make sense? Essentially, you want it to hear about monitor connect events, go look to see if the new monitor is one it wants, and if not, release that to other X clients for their use. This can be done in stages, with the ability to create a new DRM master over X done first, and then cleaning up the hotplug stuff later on. Current Status I hacked up the kernel to support the drmModeCreateLease API, and then hacked up kmscube to run two threads with different sets of KMS resources. That ran for nearly a minute before crashing and requiring a reboot. I think there may be some locking issues with page flips from two threads to the same device. I think I also made the wrong decision about how to handle lessors closing down. I tried to let the lessors get deleted and then 'orphan' the lessees. I've rewritten that so that lessees hold a reference on their lessor, keeping the lessor in place until the lessee shuts down. I've also written the kernel parts of the drmModeChangeLease support. Questions

14 March 2017

Keith Packard: Valve

Consulting for Valve in my spare time Valve Software has asked me to help work on a couple of Linux graphics issues, so I'll be doing a bit of consulting for them in my spare time. It should be an interesting diversion from my day job working for Hewlett Packard Enterprise on Memory Driven Computing and other fun things. First thing on my plate is helping support head-mounted displays better by getting the window system out of the way. I spent some time talking with Dave Airlie and Eric Anholt about how this might work and have started on the kernel side of that. A brief synopsis is that we'll split off some of the output resources from the window system and hand them to the HMD compositor to perform mode setting and page flips. After that, I'll be working out how to improve frame timing reporting back to games from a composited desktop under X. Right now, a game running on X with a compositing manager can't tell when each frame was shown, nor accurately predict when a new frame will be shown. This makes smooth animation rather difficult.

15 February 2017

Antoine Beaupré: A look at password managers

As we noted in an earlier article, passwords are a liability and we'd prefer to get rid of them, but the current reality is that we do use a plethora of passwords in our daily lives. This problem is especially acute for technology professionals, particularly system administrators, who have to manage a lot of different machines. But it also affects regular users who still use a large number of passwords, from their online bank to their favorite social-networking site. Despite the remarkable memory capacity of the human brain, humans are actually terrible at recalling even short sets of arbitrary characters with the precision needed for passwords. Therefore humans reuse passwords, make them trivial or guessable, write them down on little paper notes and stick them on their screens, or just reset them by email every time. Our memory is undeniably failing us and we need help, which is where password managers come in. Password managers allow users to store an arbitrary number of passwords and just remember a single password to unlock them all. But there is a large variety of password managers out there, so which one should we be using? At my previous job, an inventory was done of about 40 different free-software password managers in different stages of development and of varying quality. So, obviously, this article will not be exhaustive, but instead focus on a smaller set of some well-known options that may be interesting to readers.

KeePass: the popular alternative The most commonly used password-manager design pattern is to store passwords in a file that is encrypted and password-protected. The most popular free-software password manager of this kind is probably KeePass. An important feature of KeePass is the ability to auto-type passwords in forms, most notably in web browsers. This feature makes KeePass really easy to use, especially considering it also supports global key bindings to access passwords. KeePass databases are designed for simultaneous access by multiple users, for example, using a shared network drive. KeePass has a graphical interface written in C#, so it uses the Mono framework on Linux. A separate project, called KeePassX, is a clean-room implementation written in C++ using the Qt framework. Both support the AES and Twofish encryption algorithms, although KeePass recently added support for the ChaCha20 cipher. AES key derivation is used to generate the actual encryption key for the database, but the latest release of KeePass also added support for Argon2, which was the winner of the July 2015 password-hashing competition. Both programs are more or less equivalent, although the original KeePass seems to have more features in general. The KeePassX project has recently been forked into another project now called KeePassXC that implements a set of new features that are present in KeePass but missing from KeePassX, like:
  • auto-type on Linux, Mac OS, and Windows
  • database merging which allows multi-user support
  • using the web site's favicon in the interface
So far, the maintainers of KeePassXC seem to be open to re-merging the project "if the original maintainer of KeePassX in the future will be more active and will accept our merge and changes". I can confirm that, at the time of writing, the original KeePassX project now has 79 pending pull requests and only one pull request was merged since the last release, which was 2.0.3 in September 2016. While KeePass and derivatives allow multiple users to access the same database through the merging process, they do not support multi-party access to a single database. This may be a limiting factor for larger organizations, where you may need, for example, a different password set for different technical support team levels. The solution in this case is to use separate databases for each team, with each team using a different shared secret.

Pass: the standard password manager? I am currently using password-store, or pass, as a password manager. It aims to be "the standard Unix password manager". Pass is a GnuPG-based password manager that offers a surprising number of features given its small size:
  • copy-paste support
  • Git integration
  • multi-user/group support
  • pluggable extensions (in the upcoming 1.7 release)
The command-line interface is simple to use and intuitive. The following will, for example, create a pass repository, generate a 20-character password for your LWN account, and copy it to the clipboard:
    $ pass init
    $ pass generate -c lwn 20
The main issue with pass is that it doesn't encrypt the name of those entries: if someone were to compromise my machine, they could easily see which sites I have access to simply by listing the passwords stored in ~/.password-store. This is a deliberate design decision by the upstream project, as stated by a mailing list participant, Allan Odgaard:
Using a single file per item has the advantage of shell completion, using version control, browse, move and rename the items in a file browser, edit them in a regular editor (that does GPG, or manually run GPG first), etc.
Odgaard goes on to point out that there are alternatives that do encrypt the entire database (including the site names) if users really need that feature. Furthermore, there is a tomb plugin for pass that encrypts the password store in a LUKS container (called a "tomb"), although it requires explicitly opening and closing the container, which makes it only marginally better than using full disk encryption system-wide. One could also argue that password file names do not hold secret information, only the site name and username, perhaps, and that doesn't require secrecy. I do believe those should be kept secret, however, as they could be used to discover (or prove) which sites you have access to and then used to perform other attacks. One could draw a parallel with the SSH known_hosts file, which used to be plain text but is now hashed so that hosts are more difficult to discover. Also, sharing a database for multi-user support will require some sort of file-sharing mechanism. Given the integrated Git support, this will likely involve setting up a private Git repository for your team, something which may not be accessible to the average Linux user. Nothing keeps you, however, from sharing the ~/.password-store directory through another file sharing mechanism like (say) Syncthing or Dropbox. You can use multiple distinct databases easily using the PASSWORD_STORE_DIR environment variable. For example, you could have a shell alias to use a different repository for your work passwords with:
    alias work-pass="PASSWORD_STORE_DIR=~/work-passwords pass"
Group support comes from a clever use of the GnuPG multiple-recipient encryption support. You simply have to specify multiple OpenPGP identities when initializing the repository, which also works in subdirectories:
    $ pass init -p Ateam me@example.com joelle@example.com
    mkdir: created directory '/home/me/.password-store/Ateam'
    Password store initialized for me@example.com, joelle@example.com
    [master 0e3dbe7] Set GPG id to me@example.com, joelle@example.com.
     1 file changed, 2 insertions(+)
     create mode 100644 Ateam/.gpg-id
The above will configure pass to encrypt the passwords in the Ateam directory for me@example.com and joelle@example.com. Pass depends on GnuPG to do the right thing when encrypting files and how those identities are treated is entirely delegated to GnuPG's default configuration. This could lead to problems if arbitrary keys can be injected into your key ring, which could confuse GnuPG. I would therefore recommend using full key fingerprints instead of user identifiers. Regarding the actual encryption algorithms used, in my tests, GnuPG 1.4.18 and 2.1.18 seemed to default to 256-bit AES for encryption, but that has not always been the case. The chosen encryption algorithm actually depends on the recipient's key preferences, which may vary wildly: older keys and versions may use anything from 128-bit AES to CAST5 or Triple DES. To figure out which algorithm GnuPG chose, you may want to try this pipeline:
    $ echo test | gpg -e -r you@example.com | gpg -d -v
    [...]
    gpg: encrypted with 2048-bit RSA key, ID XXXXXXX, created XXXXX
      "You Person You <you@example.com>"
    gpg: AES256 encrypted data
    gpg: original file name=''
    test
As you can see, pass is primarily a command-line application, which may make it less accessible to regular users. The community has produced different graphical interfaces that either use pass directly or operate on the storage with their own GnuPG integration. I personally use pass in combination with Rofi to get quick access to my passwords, but less savvy users may want to try the QtPass interface, which should be more user-friendly. QtPass doesn't actually depend on pass and can use GnuPG directly to interact with the pass database; it is available for Linux, BSD, OS X, and Windows.

Browser password managers Most users are probably already using a password manager through their web browser's "remember password" functionality. For example, Chromium will ask if you want it to remember passwords and encrypt them with your operating system's facilities. For Windows, this encrypts the passwords with your login password and, for GNOME, it will store the passwords in the gnome-keyring storage. If you synchronize your Chromium settings with your Google account, Chromium will store those passwords on Google's servers, encrypted with a key that is stored in the Google Account itself. So your passwords are then only as safe as your Google account. Note that this was covered here in 2010, although back then Chromium didn't synchronize with the Google cloud or encrypt with the system-level key rings. That facility was only added in 2013. In Firefox, there's an optional, profile-specific master password that unlocks all passwords. In this case, the issue is that browsers are generally always open, so the vault is always unlocked. And this is for users that actually do pick a master password; users are often completely unaware that they should set one. The unlocking mechanism is a typical convenience-security trade-off: either users need to constantly input their master passwords to login or they don't, and the passwords are available in the clear. In this case, Chromium's approach of actually asking users to unlock their vault seems preferable, even though the developers actually refused to implement the feature for years. Overall, I would recommend against using a browser-based password manager. Even if it is not used for critical sites, you will end up with hundreds of such passwords that are vulnerable while the browser is running (in the case of Firefox) or at the whim of Google (in the case of Chromium). Furthermore, the "auto-fill" feature that is often coupled with browser-based password managers is often vulnerable to serious attacks, which is mentioned below. Finally, because browser-based managers generally lack a proper password generator, users may fail to use properly generated passwords, so they can then be easily broken. A password generator has been requested for Firefox, according to this feature request opened in 2007, and there is a password generator in Chrome, but it is disabled by default and hidden in the mysterious chrome://flags URL.

Other notable password managers Another alternative password manager, briefly mentioned in the previous article, is the minimalistic Assword password manager that, despite its questionable name, is also interesting. Its main advantage over pass is that it uses a single encrypted JSON file for storage, and therefore doesn't leak the name of the entries by default. In addition to copy/paste, Assword also supports automatically entering passphrases in fields using the xdo library. Like pass, it uses GnuPG to encrypt passphrases. According to Assword maintainer Daniel Kahn Gillmor in email, the main issue with Assword is "interaction between generated passwords and insane password policies". He gave the example of the Time-Warner Cable registration form that requires, among other things, "letters and numbers, between 8 and 16 characters and not repeat the same characters 3 times in a row". Another well-known password manager is the commercial LastPass service which released a free-software command-line client called lastpass-cli about three years ago. Unfortunately, the server software of the lastpass.com service is still proprietary. And given that LastPass has had at least two serious security breaches since that release, one could legitimately question whether this is a viable solution for storing important secrets. In general, web-based password managers expose a whole new attack surface that is not present in regular password managers. A 2014 study by University of California researchers showed that, out of five password managers studied, every one of them was vulnerable to at least one of the vulnerabilities studied. LastPass was, in particular, vulnerable to a cross-site request forgery (CSRF) attack that allowed an attacker to bypass account authentication and access the encrypted database.

Problems with password managers When you share a password database within a team, how do you revoke a team member's access? While you can, for example, re-encrypt a pass database with new keys (thereby removing or adding certain accesses) or change the password on a KeePass database, a hostile party could have made a backup of the database before the revocation. Indeed, in the case of pass, older entries are still in the Git history. So access revocation is a problematic issue found with all shared password managers, as it may actually mean going through every password and changing them online. This fundamental problem with shared secrets can be better addressed with a tool like Vault or SFLvault. Those tools aim to provide teams with easy ways to store dynamic tokens like API keys or service passwords and share them not only with other humans, but also make them accessible to machines. The general idea of those projects is to store secrets in a central server and send them directly to relevant services without human intervention. This way, passwords are not actually shared anymore, which is similar in spirit to the approach taken by centralized authentication systems like Kerberos. If you are looking at password management for teams, those projects may be worth a look. Furthermore, some password managers that support auto-typing were found to be vulnerable to HTML injection attacks: if some third-party ad or content is able to successfully hijack the parent DOM content, it can masquerade as a form and fool auto-typing software, as demonstrated by this paper that was submitted at USENIX 2014. Fortunately, KeePass was not vulnerable according to the security researchers, but LastPass was, again, vulnerable.

Future of password managers? All of the solutions discussed here assume you have a trusted computer you regularly have access to, which is a usage pattern that seems to be disappearing for a majority of the population. You could consider your phone to be that trusted device, yet a phone can be lost or stolen more easily than a traditional workstation or even a laptop. And while KeePass has Android and iOS ports, those do not resolve the question of how to share the password storage among those devices or how to back them up. Password managers are fundamentally file-based, and the "file" concept seems to be quickly disappearing, faster than we technologists sometimes like to admit. Looking at some relatives' use of computers, I notice it is less about "files" than images, videos, recipes, and various abstract objects that are stored in the "cloud". They do not use local storage so much anymore. In that environment, password managers lose their primary advantage, which is a local, somewhat offline file storage that is not directly accessible to attackers. Therefore, certain password managers are specifically designed for the cloud, like LastPass or web browser profile synchronization features, without necessarily addressing the inherent issues with cloud storage, which opens up huge privacy and security issues that we absolutely need to address. This is where the "password hasher" design comes in. Also known as "stateless" or "deterministic" password managers, password hashers are emerging as a convenient solution that could possibly replace traditional password managers as users switch from generic computing platforms to cloud-based infrastructure. We will cover password hashers and the major security challenges they pose in a future article.
Note: this article first appeared in the Linux Weekly News.

31 December 2016

Jonathan McDowell: IMDB Top 250: Complete. Sort of.

Back in 2010, inspired by Juliet, I set about doing 101 things in 1001 days. I had various levels of success, but one of the things I did complete was the aim of watching half of the IMDB Top 250. I didn't stop at that point, but continued to work through it at a much slower pace until I realised that through the Queen's library I had access to quite a few DVDs of things I was missing, and that it was perfectly possible to complete the list by the end of 2016. So I did. I should point out that I didn't set out to watch the list because I'm some massive film buff. It was more a mixture of watching things that I wouldn't otherwise choose to, and also watching things I knew were providing cultural underpinnings to films I had already watched and enjoyed. That said, people have asked for some sort of write up when I was done. So here are some random observations, which are almost certainly not what they were looking for.

My favourite film is not in the Top 250 First question anyone asks is "What's your favourite film?". That depends a lot on what I'm in the mood for really, but fairly consistently my answer is The Hunt for Red October. This has never been in the Top 250 that I've noticed. Which either says a lot about my taste in films, or the Top 250, or both. Das Boot was in the list and I would highly recommend it (but then I like all submarine movies it seems).

The Shawshank Redemption is overrated I can't recall a time when The Shawshank Redemption was not top of the list. It's a good film, and I've watched it many times, but I don't think it's good enough to justify its seemingly unbroken run. I don't have a suggestion for a replacement, however.

The list is constantly changing I say I've completed the Top 250, but that's working from a snapshot I took back in 2010. Today the site is telling me I've watched 215 of the current list. Last night it was 214 and I haven't watched anything in between. Some of those are films released since 2010 (in particular new releases often enter high and then fall out of the list over a month or two), but the current list has films as old as 1928 (The Passion of Joan of Arc) that weren't there back in 2010. So keeping up to date is not simply a matter of watching new releases.

The best way to watch the list is terrestrial TV There were various methods I used to watch the list. Some I'd seen in the cinema when they came out (or was able to catch that way anyway - the QFT showed Duck Soup, for example). Netflix and Amazon Video had some films, but overall a very disappointing percentage. The QUB Library, as previously mentioned, had a good number of DVDs on the list (especially the older things). I ended up buying a few (Dial M for Murder on 3D Bluray was well worth it; it's beautifully shot and unobtrusively 3D), borrowed a few from friends and ended up finishing off the list by a Lovefilm one month free trial. The single best source, however, was UK terrestrial TV. Over the past 6 years Freeview (the free-to-air service here) had the highest percentage of the list available. Of course this requires some degree of organisation to make sure you don't miss things.

Films I enjoyed Not necessarily my favourite, but things I wouldn't have necessarily watched and was pleasantly surprised by. No particular order, and I'm leaving out a lot of films I really enjoyed but would have got around to watching anyway.
  • Clint Eastwood films - Gran Torino and Million Dollar Baby were both excellent but neither would have appealed to me at first glance. I hated Unforgiven though.
  • Jimmy Stewart. I'm not a fan of It's a Wonderful Life (which I'd already watched because it's Lister's favourite film), but Harvey is obviously the basis of lots of imaginary friend movies and Rear Window explained a Simpsons episode (there were a lot of Simpsons episodes explained by watching the list).
  • Spaghetti Westerns. I wouldn't have thought they were my thing, but I really enjoyed the Sergio Leone films (A Fistful of Dollars etc.). You can see where Tarantino gets a lot of his inspiration.
  • Foreign language films. I wouldn't normally seek these out. And in general it seems I cannot get on with Italian films (except Life is Beautiful), but Amores Perros, Amelie and Ikiru were all better than expected.
  • Kind Hearts and Coronets. For some reason I didn't watch this until almost the end; I think the title always put me off. Turned out to be very enjoyable.

Films I didn't enjoy I'm sure these mark me out as not being a film buff, but there are various things I would have turned off if I'd caught them by accident rather than setting out to watch them. I've kept the full list available, if you're curious.

6 November 2016

Russ Allbery: Review: Digger

Review: Digger, by Ursula Vernon
Publisher: Sofawolf
Copyright: October 2013
ISBN: 1-936689-32-4
Format: Graphic novel
Pages: 837
As Digger opens, the eponymous wombat is digging a tunnel. She's not sure why, or where to, since she hit a bad patch of dirt. It happens sometimes, underground: pockets of cave gas and dead air that leave one confused and hallucinating. But this one was particularly bad, it's been days, she broke into a huge cave system, and she's thoroughly lost. Tripping on an ammonite while running from voices in the dark finally helps her come mostly to her senses and start tunneling up, only to break out at the feet of an enormous statue of Ganesh. A talking statue of Ganesh. Digger is a web comic that ran from 2005 to 2011. The archives are still on the web, so you can read the entire saga for free. Reviewed here is the complete omnibus edition, which collects the entire strip (previously published in six separate graphic novels containing two chapters each), a short story, a bonus story that was published in volume one, a bunch of random illustrated bits about the world background, author's notes from the web version, and all of the full-color covers of the series chapters (the rest of the work is in black and white). Publication of the omnibus was originally funded by a Kickstarter, but it's still available for regular sale. (I bought it normally via Amazon long after the Kickstarter finished.) It's a beautiful and durable printing, and I recommend it if you have the money to buy things you can read for free. This was a very long-running web comic, but Digger is a single story. It has digressions, of course, but it's a single coherent work with a beginning, middle, and end. That's one of the impressive things about it. Another is that it's a fantasy work involving gods, magic, oracles, and prophecies, but it's not about a chosen one, and it's not a coming of age story. Digger (Digger-of-Needlessly-Convoluted-Tunnels, actually, but Digger will do) is an utterly pragmatic wombat who considers magic to be in poor taste (as do all right-thinking wombats), gods to be irritating underground obstacles that require care and extra bracing, and prophecies to not be worth the time spent listening to them. It's a bit like the famous Middle Earth contrast between the concerns of the hobbits and the affairs of the broader world, if the hobbits were well aware of the broader world, able to deal with it, but just thought all the magic was tacky and irritating. Magic and gods do not, of course, go away just because one is irritated by them, and Digger eventually has to deal with quite a lot of magic and mythology while trying to figure out where home is and how to get back to it. However, she is drawn into the plot less by any grand danger to the world and more because she keeps managing to make friends with everyone, even people who hate each other. It's not really an explicit goal, but Digger is kind-hearted, sensible, tries hard to do the right thing, and doesn't believe in walking away from problems. In this world, that's a recipe for eventual alliances from everything from warrior hyenas to former pirate shrews, not to mention a warrior cult, a pair of trolls, and a very confused shadow... something. All for a wombat who would rather be digging out a good root cellar. (She does, at least, get a chance to dig out a good root cellar.) The characters are the best part, but I love everything about this story. Vernon's black and white artwork isn't as detailed as, say, Dave Sim at his best, and some of the panels (particularly mostly dark ones) seemed a bit scribbly. 
But it's mostly large-panel artwork with plenty of room for small touches and Easter eggs (watch for the snail, and the cave fish graffiti that I missed until it was pointed out by the author's notes), and it does the job of telling the story. Honestly, I like the black and white panels better than the color chapter covers reproduced in the back. And the plot is solid and meaty, with a satisfying ending and some fantastic detours (particularly the ghosts). I think my favorite bits, though, are the dialogue.
"Do you have any idea how long twelve thousand years is?"
"I know it's not long enough to make a good rock."
Digger is snarky in all the right ways, and sees the world in terms of tunnels, digging, and geology. Vernon is endlessly creative in how she uses that to create comebacks, sayings, analysis, and an entire culture. This is one of the best long-form comics I've read: a solid fantasy story with great characters, reliably good artwork, a coherent plot arc, wonderful dialogue, a hard-working and pragmatic protagonist (who happens to be female), and a wonderfully practical sense of morality and ethics. I'm sorry it's over. If you've not already read it, I highly recommend it. Remember tunnel 17! Rating: 9 out of 10

1 October 2016

Kees Cook: security things in Linux v4.6

Previously: v4.5. The v4.6 Linux kernel release included a bunch of stuff, with much more of it under the KSPP umbrella. seccomp support for parisc Helge Deller added seccomp support for parisc, which included plumbing support for PTRACE_GETREGSET to get the self-tests working. x86 32-bit mmap ASLR vs unlimited stack fixed Hector Marco-Gisbert removed a long-standing limitation to mmap ASLR on 32-bit x86, where setting an unlimited stack (e.g. ulimit -s unlimited) would turn off mmap ASLR (which provided a way to bypass ASLR when executing setuid processes). Given that ASLR entropy can now be controlled directly (see the v4.5 post), and that the cases where this created an actual problem are very rare, if a system sees collisions between unlimited stack and mmap ASLR, it can just adjust the 32-bit ASLR entropy instead. x86 execute-only memory Dave Hansen added Protection Key support for future x86 CPUs and, as part of this, implemented support for execute-only memory in user-space. On pkeys-supporting CPUs, using mmap(..., PROT_EXEC) (i.e. without PROT_READ) will mean that the memory can be executed but cannot be read (or written). This provides some mitigation against automated ROP gadget finding where an executable is read out of memory to find places that can be used to build a malicious execution path. Using this will require changing some linker behavior (to avoid putting data in executable areas), but seems to otherwise Just Work. I'm looking forward to either emulated QEmu support or access to one of these fancy CPUs. CONFIG_DEBUG_RODATA enabled by default on arm and arm64, and mandatory on x86 Ard Biesheuvel (arm64) and I (arm) made the poorly-named CONFIG_DEBUG_RODATA enabled by default. This feature controls whether the kernel enforces proper memory protections on its own memory regions (code memory is executable and read-only, read-only data is actually read-only and non-executable, and writable data is non-executable). This protection is a fundamental security primitive for kernel self-protection, so making it on-by-default is required to start any kind of attack surface reduction within the kernel. On x86 CONFIG_DEBUG_RODATA was already enabled by default, but, at Ingo Molnar's suggestion, I made it mandatory: CONFIG_DEBUG_RODATA cannot be turned off on x86. I expect we'll get there with arm and arm64 too, but the protection is still somewhat new on these architectures, so it's reasonable to continue to leave an out for developers who find themselves tripping over it. arm64 KASLR text base offset Ard Biesheuvel reworked a ton of arm64 infrastructure to support kernel relocation and, building on that, Kernel Address Space Layout Randomization of the kernel text base offset (and module base offset). As with x86 text base KASLR, this is a probabilistic defense that raises the bar for kernel attacks where finding the KASLR offset must be added to the chain of exploits used for a successful attack. One big difference from x86 is that the entropy for the KASLR must come either from Device Tree (in the /chosen/kaslr-seed property) or from UEFI (via EFI_RNG_PROTOCOL), so if you're building arm64 devices, make sure you have a strong source of early-boot entropy that you can expose through your boot-firmware or boot-loader. zero-poison after free Laura Abbott reworked a bunch of the kernel memory management debugging code to add zeroing of freed memory, similar to PaX/Grsecurity's PAX_MEMORY_SANITIZE feature.
This feature means that memory is cleared at free, wiping any sensitive data so it doesn't have an opportunity to leak in various ways (e.g. accidentally uninitialized structures or padding), and that certain types of use-after-free flaws cannot be exploited since the memory has been wiped. To take things even a step further, the poisoning can be verified at allocation time to make sure that nothing wrote to it between free and allocation (called "sanity checking"), which can catch another small subset of flaws. To understand the pieces of this, it's worth describing that the kernel's higher-level allocator, the page allocator (e.g. __get_free_pages()), is used by the finer-grained slab allocator (e.g. kmem_cache_alloc(), kmalloc()). Poisoning is handled separately in both allocators. The zero-poisoning happens at the page allocator level. Since the slab allocators tend to do their own allocation/freeing, their poisoning happens separately (since on slab free nothing has been freed up to the page allocator). Only limited performance tuning has been done, so the penalty is rather high at the moment, at about 9% when doing a kernel build workload. Future work will include some exclusion of frequently-freed caches (similar to PAX_MEMORY_SANITIZE), and making the options entirely CONFIG controlled (right now both CONFIGs are needed to build in the code, and a kernel command line is needed to activate it). Performing the sanity checking (mentioned above) adds another roughly 3% penalty. In the general case (and once the performance of the poisoning is improved), the security value of the sanity checking isn't worth the performance trade-off. Tests for the features can be found in lkdtm as READ_AFTER_FREE and READ_BUDDY_AFTER_FREE. If you're feeling especially paranoid and have enabled sanity-checking, WRITE_AFTER_FREE and WRITE_BUDDY_AFTER_FREE can test these as well. To perform zero-poisoning of page allocations and (currently non-zero) poisoning of slab allocations, build with:
CONFIG_DEBUG_PAGEALLOC=n
CONFIG_PAGE_POISONING=y
CONFIG_PAGE_POISONING_NO_SANITY=y
CONFIG_PAGE_POISONING_ZERO=y
CONFIG_SLUB_DEBUG=y
and enable the page allocator poisoning and slab allocator poisoning at boot with this on the kernel command line:
page_poison=on slub_debug=P
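As an aside not from the original post: once the kernel is also built with CONFIG_LKDTM, the lkdtm tests mentioned above can be triggered through debugfs. A minimal sketch (the exact crash-type names come from the lkdtm code, and the test may deliberately oops the machine):
mount -t debugfs none /sys/kernel/debug 2>/dev/null || true
echo READ_AFTER_FREE > /sys/kernel/debug/provoke-crash/DIRECT
dmesg | tail    # look for the poison value being read back, or an intentional oops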
To add sanity-checking, change PAGE_POISONING_NO_SANITY=n, and add F to slub_debug, as in slub_debug=PF.

read-only after init
I added the infrastructure to support making certain kernel memory read-only after kernel initialization (inspired by a small part of PaX/Grsecurity's KERNEXEC functionality). The goal is to continue to reduce the attack surface within the kernel by making even more of the memory, especially function pointer tables, read-only (which depends on CONFIG_DEBUG_RODATA above). Function pointer tables (and similar structures) are frequently targeted by attackers when redirecting execution. While many are already declared const in the kernel source code, making them read-only (and therefore unavailable to attackers) for their entire lifetime, there is a class of variables that get initialized during kernel (and module) start-up (i.e. written to during functions that are marked __init) and then never (intentionally) written to again. Some examples are things like the VDSO, vector tables, arch-specific callbacks, etc. As it turns out, most architectures with kernel memory protection already delay making their data read-only until after __init (see mark_rodata_ro()), so it's trivial to declare a new data section (".data..ro_after_init") and add it to the existing read-only data section (".rodata"). Kernel structures can be annotated with the new section (via the __ro_after_init macro), and they'll become read-only once boot has finished.

The next step for attack surface reduction infrastructure will be to create a kernel memory region that is passively read-only, but can be made temporarily writable (by a single un-preemptable CPU), for storing sensitive structures that are written to only very rarely. Once this is done, much more of the kernel's attack surface can be made read-only for the majority of its lifetime. As people identify places where __ro_after_init can be used, we can grow the protection. A good place to start is to look through the PaX/Grsecurity patch to find uses of __read_only on variables that are only written to during __init functions. The rest are places that will need the temporarily-writable infrastructure (PaX/Grsecurity uses pax_open_kernel()/pax_close_kernel() for these).

That's it for v4.6; next up will be v4.7!

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

17 September 2016

Jonas Meurer: data recovery

Data recovery with ddrescue, testdisk and sleuthkit
From time to time I need to recover data from disks. Reasons can be broken flash/hard disks as well as accidentally deleted files. Fortunately, this doesn't happen too often, which on the downside means that I usually don't remember the details about best practice. Now that a good friend asked me to recover very important data from a broken flash disk, I take the opportunity to write down what I did and hopefully won't need to read the same docs again next time :) Disclaimer: I didn't take the time to read through the full documentation. This is rather a brief summary of best practice to my knowledge, not a sophisticated and detailed explanation of data recovery techniques.

Create image with ddrescue
First and most important rule for recovery tasks: don't work on the original, use a copied image instead. This way you can do whatever you want without risking further data loss. The perfect tool for this is GNU ddrescue. Contrary to dd, it doesn't reiterate over a broken sector with I/O errors again and again while copying. Instead, it remembers the broken sector for later and goes on to the next sector first. That way, all sectors that can be read without errors are copied first. This is particularly important as every extra attempt to read a broken sector can further damage the source device, causing even more data loss. In Debian, ddrescue is available in the gddrescue package:
apt-get install gddrescue
Copying the raw disk content to an image with ddrescue is as easy as:
ddrescue /dev/disk disk-backup.img disk.log
Giving a logfile as the third argument has the great advantage that you can interrupt ddrescue at any time and continue the copy process later, possibly with different options. In case of very large disks where only the first part was in use, it might be useful to start with copying the beginning only:
ddrescue -i0 -s20MiB /dev/disk disk-backup.img disk.log
In case of errors after the first run, you should start ddrescue again with direct read access (-d) and tell it to try again bad sectors three times (-r3):
ddrescue -d -r3 /dev/disk disk-backup.img disk.log
If some sectors are still missing afterwards, it might help to run ddrescue with infinite retries for some time (e.g. one night):
ddrescue -d -r-1  /dev/disk disk-backup.img disk.log
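As an aside (not in the original write-up): the gddrescue package also ships ddrescuelog, which can summarise from the logfile (mapfile) how much has been rescued so far and how much is still marked as bad:
ddrescuelog -t disk.log    # show rescued/bad-area totals recorded in the mapfile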

Inspect the image
Now that you have an image of the raw disk, you can take a first look at what it contains. If ddrescue was able to recover all sectors, chances are high that no further magic is required and all data is there. If the raw disk contains (or used to contain) a partition table, take a first look with mmls from sleuthkit:
mmls disk-backup.img
In case of an intact partition table, you can try to create device maps with kpartx after setting up a loop device for the image file:
losetup /dev/loop0 disk-backup.img
kpartx -a /dev/loop0
If kpartx finds partitions, they will be made available at /dev/mapper/loop0p1, /dev/mapper/loop0p2 and so on. Search for filesystems on the partitions with fsstat from sleuthkit on the partition device map:
fsstat /dev/mapper/loop0p1
Or run fsstat directly on the image file with the offset discovered by mmls earlier (the sleuthkit tools expect the offset in sectors):
fsstat -o 8064 disk-backup.img
The offset obviously is not needed if the image contains a partition dump (without partition table):
fsstat disk-backup.img
If a filesystem is found, simply try to mount it:
mount -t <fstype> -o ro /dev/mapper/loop0p1 /mnt
or set up a loop device at the right offset and mount that (if mmls reported the offset in sectors, multiply by the sector size to get the byte offset that losetup and mount expect):
losetup -o 8064 /dev/loop1 disk-backup.img
mount -t <fstype> -o ro /dev/loop1 /mnt
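When you are done inspecting, the mappings and loop devices can be torn down again in reverse order (not part of the original write-up, but standard usage of these tools):
umount /mnt
kpartx -d /dev/loop0    # remove the partition device maps again
losetup -d /dev/loop0   # free the loop device(s)
losetup -d /dev/loop1   # only if the offset variant above was used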

Recover partition table
If the partition table is broken, try to recover it with testdisk. But first, create a second copy of the image, as you will alter it now:
ddrescue disk-backup.img disk-backup2.img
testdisk disk-backup2.img
In testdisk, select a media (e.g. Disk disk-backup2.img) and proceed, then select the partition table type (usually Intel or EFI GPT) and analyze -> quick search. If partitions are found, select one or more and write the partition structure to disk.

Recover files
Finally, let's try to recover the actual files from the image.

testdisk
If the partition table recovery was successful, try to undelete files from within testdisk. Go back to the main menu and select advanced -> undelete.

photorec
Another option is to use the photorec tool that comes with testdisk. It searches the image for known file structures directly, ignoring possible filesystems:
photorec sdb2.img
You have to select either a particular partition or the whole disk, a file system (ext2/ext3 vs. other) and a destination for recovered files. Last time, photorec was my last resort, as the fat32 filesystem was so damaged that testdisk detected only an empty filesystem.

sleuthkit
sleuthkit also ships with tools to undelete files. I tried fls and icat. fls searches for and lists files and directories in the image, searching for parts of the former filesystem. icat copies files by their inode number. Last time I tried, fls and icat didn't recover any new files compared to photorec. Still, for the sake of completeness, I document what I did. First, I invoked fls in order to search for files:
fls -f fat32 -o 8064 -pr disk-backup.img
Then, I tried to backup one particular file from the list:
icat -f fat32 -o 8064 disk-backup.img <INODE> > recovered_file
Finally, I used the recoup.pl script from Dave Henk in order to batch-recover all discovered files:
wget http://davehenk.googlepages.com/recoup.pl
chmod +x recoup.pl
vim recoup.pl
[...]
my $fullpath="~/recovery/sleuthkit/";
my $FLS="/usr/bin/fls";
my @FLS_OPT=("-f","fat32","-o","8064","-pr","-m $fullpath","-s 0");
my $FLS_IMG="~/recovery/disk-image.img";
my $ICAT_LOG="~/recovery/icat.log";
my $ICAT="/usr/bin/icat";
my @ICAT_OPT=("-f","fat32","-o","8064");
[...]
Further down, the double quotes around $fullfile needed to be replaced by single quotes (at least in my case, as $fullfile contained a subdir called '$OrphanFiles'):
system("$ICAT @ICAT_OPT $ICAT_IMG $inode > \'$fullfile\' 2>> $ICAT_LOG") if ($inode != 0);
That's it for now. Feel free to comment with suggestions on how to further improve the process of recovering data from broken disks.

13 September 2016

John Goerzen: Two Boys, An Airplane, Plus Hundreds of Old Computers

"Was there anything you didn't like about our trip?" Jacob's answer: "That we had to leave so soon!" That's always a good sign. When I first heard about the Vintage Computer Festival Midwest, I almost immediately got the notion that I wanted to go. Besides the TRS-80 CoCo II up in my attic, I also have fond memories of an old IBM PC with CGA monitor, a 25MHz 486, an Alpha also in my attic, and a lot of other computers along the way. I didn't really think my boys would be interested. But I mentioned it to them, and they just lit up. They remembered the Youtube videos I'd shown them of old line printers and punch card readers, and thought it would be great fun. I thought it could be a great educational experience for them too, and it was. It also turned into a trip that combined being a proud dad with so many of my other interests. Quite a fun time. [photo: Jacob modeling his new t-shirt]

Captain Jacob
Chicago being not all that close to Kansas, I planned to fly us there. If you're flying yourself, solid flight planning is always important. I had already planned out my flight using electronic tools, but I always carry paper maps with me in the cockpit for backup. I got them out and the boys and I planned out the flight the old-fashioned way. Here's Oliver using a scale ruler (with markings for miles corresponding to the scale of the map) and Jacob doing calculating for us. We measured the entire route and came to within one mile of the computer's calculation for each segment; those boys are precise! [photo] We figured out how much fuel we'd use, where we'd make fuel stops, etc. The day of our flight, we made it as far as Davenport, Iowa, when a chance of bad weather en route to Chicago convinced me to land there and drive the rest of the way. The boys saw that as part of the exciting adventure! Jacob is always interested in maps, and had kept wanting to use my map whenever we flew. So I dug an old Android tablet out of the attic, put Avare on it (which has aviation maps), and let him use that. He was always checking it while flying, sometimes saying this over his headset: "DING. Attention all passengers, this is Captain Jacob speaking. We are now 45 miles from St. Joseph. Our altitude is 6514 feet. Our speed is 115 knots. We will be on the ground shortly. Thank you. DING" Here he is at the Davenport airport, still busy looking at his maps: [photo] Every little airport we stopped at featured adults smiling at the boys. People enjoyed watching a dad and his kids flying somewhere together. Oliver kept busy too. He loves to help me on my pre-flight inspections. He will report every little thing to me: a scratch, a fleck of paint missing on a wheel cover, etc. He takes it seriously. Both boys love to help get the plane ready or put it away.

The Computers
Jacob quickly gravitated towards a few interesting things. He sat for about half an hour watching this old Commodore plotter do its thing (click for video): [video] His other favorite thing was the phones. Several people had brought complete analog PBXs with them. They used them to demonstrate various old phone-related hardware; one had several BBSs running with actual modems, another had old answering machines and home-security devices. Jacob learned a lot about phones, including how to operate a rotary-dial phone, which he'd never used before! [photo] Oliver was drawn more to the old computers.
He was fascinated by the IBM PC XT, which I explained was just about like a model I used to get to use sometimes. They learned about floppy disks and how computers store information. [photo] He hadn't used joysticks much, and found Pong ("this is a soccer game!") interesting. Somebody had also replaced the guts of a TRS-80 with a Raspberry Pi running a SNES emulator. This had thoroughly confused me for a little while, and excited Oliver. Jacob enjoyed an old TRS-80, which, through a modern Ethernet interface and a little computation help in AWS, provided an interface to Wikipedia. Jacob figured out the text-mode interface quickly. Here he is reading up on trains. [photo] I had no idea that Commodore made a lot of adding machines and calculators before they got into the home computer business. There was a vast table with that older Commodore hardware, too much to get in a single photo. But some of the adding machines had their covers off, so the boys got to see all the little gears and wheels and learn how an adding machine can do its printing. [photo] And then we get to my favorite: the big iron. Here is a VAX - a working VAX. When you have a computer that huge, it's easier for the kids to understand just what something is. [photo] When we encountered the table from the Glenside Color Computer Club, featuring the good old CoCo IIs like what I used as a kid (and have up in my attic), I pointed out to the boys that we have a computer just like this that can do these things, and they responded "wow!" I think they are eager to try out floppy disks and disk BASIC now. Some of my favorites were the old Unix systems, which are a direct ancestor to what I've been working with for decades now. Here's AT&T System V release 3 running on its original hardware: [photo] And there were a couple of Sun workstations there, making me nostalgic for my college days. If memory serves, this one is actually running on m68k in the pre-Sparc days: [photo]

Returning home
After all the excitement of the weekend, both boys zonked out for a while on the flight back home. Here's Jacob, sleeping with his maps still up. [photo] As we were nearly home, we hit a pocket of turbulence, the kind that feels as if the plane is dropping a bit (it's perfectly normal and safe; you've probably felt that on commercial flights too). I was a bit concerned about Oliver; he is known to get motion sick in cars (and even planes sometimes). But what did I hear from Oliver? "Whee! That was fun! It felt like a roller coaster! Do it again, dad!"

12 June 2016

Iustin Pop: Elsa Bike Trophy 2016 - my first bike race!

Elsa Bike Trophy 2016 - my first bike race! So today, after two months of intermittent training using Zwift and some actual outside rides, I did my first bike race. Not of 2016, not of 2000+, but like ever. Which is strange, as I learned biking very young, and I did like to bike. But as it turned out, even though I didn't like running as a child, I did participate in a number of running events over the years, but no biking ones.

The event
Elsa Bike Trophy is a mountain bike event - cross-country, not downhill or anything crazy; it takes place in Estavayer-le-Lac, and has two courses - one 60Km with 1'791m altitude gain, and a smaller one of 30Km with 845m altitude gain. I went, of course, for the latter. 845m is more than I ever did in a single ride, so it was good enough for a first try. The web page says that this smaller course "est nerveux, technique et ne laisse que peu de répit" (is nervous, technical, and leaves hardly any respite). I chose to think that's a bit of an exaggeration, and that it will be relatively easy (as I'm not too skilled technically). The atmosphere there was like for the running races, with the exception of bike stuff being sold, and people on very noisy rollers. I'm glad for my trainer, which sounds many decibels quieter. The long race started at 12:00, and the shorter one at 12:20. While waiting for the start I had two concerns in mind: whether I'm able to do the whole course (endurance), and whether it will be too cold (the weather kept moving towards rain). I had a small concern about the state of the course, as it was not very nice weather recently, but a small one. And then, after one hour plus of waiting, go!

Racing, with a bit of "swimming"
At first things went as expected. Starting on paved roads, moving towards the small town exit, a couple of 14% climbs, then more flat roads, then a nice and hard 18% short climb (I'll never again complain about < 10%!), then entering the woods. It became quickly apparent that the ground in the forest was in a much worse state than I feared. Much worse as in a few orders of magnitude. In about 5 minutes after entering the tree cover, my reasonably clean, reasonably light bike became a muddy, heavy monster. And the pace that until then went quite OK became walking pace, as the first rider that didn't manage to keep going up (because the wheel turned out of the track) blocked the one behind him, who had to stop, and so on until we were one line (or two, depending on how wide the trail was) of riders walking their bikes up. While on dry ground walking your bike up is no problem, and hiking through mud with good hiking shoes is also no problem, walking up with biking shoes is a pain. Your foot slides and you waste half of your energy "swimming" in the mud. Once the climb is over, you get on the bike, and of course the pedals and cleats are full of heavy mud, so it takes a while until you can actually clip in. Here the trail version of SPD was really useful, as I could pedal reasonably well without being clipped in; I just had to be careful not to push too hard. Then maybe you exit the trail and get on a paved road, but the wheels are so full of mud that you still are very slow (and accelerate very slowly), until they shed enough of the mud to become somewhat more "normal". After a bit of this "up through mud, flat and shedding mud", I came upon the first real downhill section. I would have been somewhat confident on dry ground, but I got scared and got off my bike. Better safe than sorry was the thing for now.
And after this it was a repetition of the above: climb, sometimes (rarely) on the bike, most times pushing the bike, fast flat sections through muddy terrain where any mistake in controlling the bike can send the front wheel flying due to the mud being highly viscous, slow flat sections through very liquid mud where it definitely felt like swimming, or the occasional dry section. My biggest fear, uphill/endurance, was unfounded. The most gains I made were on the dry uphills, where I had enough stamina to overtake. On flat ground I mostly kept order (i.e. neither being overtaken nor overtaking), but on downhill sections I lost lots of time, and was overtaken a lot. Still, it was a good run. And then, after about 20 kilometres out of the 30, I got tired enough of getting off the bike, on the bike, and also tired mentally from not being careful enough, that I stopped getting off the bike on downhills. And the feeling was awesome! It was actually much, much easier to flow through the mud and rocks and roots on downhill, even when it was difficult (for me), like 40cm drops (estimated), than doing it on foot, where you slide without control and the bike can come crashing down on you. It was a liberating feeling, like finally having overcome the mud. I was soo glad to have done a one-day training course with Swiss Alpine Adventure, as it really helped. Thanks Dave! Of course, people were still overtaking me, but I also overtook some people (who were on foot; he he, I wasn't the only one it seems). And it being easier, I had some more energy, so I was able to push a bit harder on the flats and dry uphill sections. And then the remaining distance started shrinking, and the last downhill was over, I entered the small town through familiar roads, a passer-by cries "one kilometre left", I push hard (I mean, as hard as I could after all the effort), and I reach the finish. Oh, and my other concern, the rain? Yes, it did rain somewhat, and I was glad for it (I keep overheating); there was a single moment I felt cold, when exiting a nice cosy forest into a field where the wind was very strong - headwind, of course.

Lessons learned
I did learn a lot in this first event.

Results
So, how did I do after all? As soon as I reached the finish and recovered my items, among which the phone, I checked the datasport page: I was rank 59/68 in my category. Damn, I hoped (and thought) I would do better. Similar % in the overall ranking for this distance. That aside, it was mighty fun. So much fun I'd do it again tomorrow! I forgot the awesome atmosphere of such events, even in the back of the rankings. And then, after I drive home and open the datasport page on my workstation, I get very confused: the overall number of participants was different. And then I realised: not everybody had finished the race when I first checked (d'oh)! Final ranking: 59 out of 84 in my category, and 247/364 in the overall 30km rankings. That makes it 70% and 67% respectively, which matches somewhat with my usual running results from a few years back (but a bit worse). It is in any case better than what I thought originally, yay! Also, see the Strava activity for some more statistics (note that my Garmin says it was not 800+ meters of altitude). I'd embed a nice Veloviewer 3D-map but I can't seem to get the embed option, hmm. TODO: train more endurance, train more technique, train in more various conditions!

Mario Lang: A Raspberry Pi Zero in a Handy Tech Active Star 40 Braille Display

TL;DR: I put a $5 Raspberry Pi Zero, a Bluetooth USB dongle, and the required adapter cable into my new Handy Tech Active Star 40 braille display. An internal USB port provides the power. This has transformed my braille display into an ARM-based, monitorless, Linux laptop that has a keyboard and a braille display. It can be charged/powered via USB so it can also be run from a power bank or a solar charger, thus potentially being able to run for days, rather than just hours, without needing a standard wall-jack. [picture: a Raspberry Pi Zero embedded within an Active Star 40] [picture: a braille display with a keyboard on top and a Raspberry Pi Zero inside]
Some Background on Braille Display Form Factors Braille displays come in various sizes. There are models tailored for desktop use (with 60 cells or more), models tailored for portable use with a laptop (usually with 40 cells), and, nowadays, there are even models tailored for on-the-go use with a smartphone or similar (with something like 14 or 18 cells). Back in the old days, braille displays were rather massive. A 40-cell braille display was typically about the size of a 13" laptop. In modern times, manufacturers have managed to reduce the size of the internals such that a 40-cell display can be placed in front of a laptop or keyboard instead of placing the laptop on top of the braille display. While this is a nice achievement, I personally haven't found it to be very convenient because you now have to place two physically separate devices on your lap. It's OK if you have a real desk, but, at least in my opinion, if you try to use your laptop as its name suggests, it's actually inconvenient to use a small form factor, 40-cell display. For this reason, I've been waiting for a long-promised new model in the Handy Tech Star series. In 2002, they released the Handy Tech Braille Star 40, which is a 40-cell braille display with enough space to put a laptop directly on top of it. To accommodate larger laptop models, they even built in a little platform at the back that can be pulled out to effectively enlarge the top surface. Handy Tech has now released a new model, the Active Star 40, that has essentially the same layout but modernized internals. [picture: a plain Active Star 40] You can still pull out the little platform to increase the space that can be used to put something on top. [picture: an Active Star 40 with extended platform and a laptop on top] But, most conveniently, they've designed in an empty compartment, roughly the size of a modern smartphone, beneath the platform. The original idea was to actually put a smartphone inside, but this has turned out (at least to me) to not be very feasible. Fortunately, they thought about the need for electricity and added a Micro USB cable terminating within the newly created, empty compartment. My first idea was to put a conventional Raspberry Pi inside. When I received the braille display, however, we immediately noticed that a standard-sized rpi is roughly 3mm too high to fit into the empty compartment. Fortunately, though, a co-worker noticed that the Raspberry Pi Zero was available for order. The Raspberry Pi Zero is a lot thinner, and fits perfectly inside (actually, I think there's enough space for two, or even three, of them). So we ordered one, along with some accessories like a 64GB SDHC card, a Bluetooth dongle, and a Micro USB adapter cable. The hardware arrived a few days later, and was immediately bootstrapped with the assistance of very helpful friends. It works like a charm!
Technical Details The backside of the Handy Tech Active Star 40 features two USB host ports that can be used to connect devices such as a keyboard. A small form-factor, USB keyboard with a magnetic clip-on is included. When a USB keyboard is connected, and when the display is used via Bluetooth, the braille display firmware additionally offers the Bluetooth HID profile, and key press/release events received via the USB port are passed through to it. I use the Bluetooth dongle for all my communication needs. Most importantly, BRLTTY is used as a console screen reader. It talks to the braille display via Bluetooth (more precisely, via an RFCOMM channel). The keyboard connects through to Linux via the Bluetooth HID profile. Now, all that is left is network connectivity. To keep the energy consumption as low as possible, I decided to go for Bluetooth PAN. It appears that the tethering mode of my mobile phone works (albeit with a quirk), so I can actually access the internet as long as I have cell phone reception. Additionally, I configured a Bluetooth PAN access point on my desktop machines at home and at work, so I can easily (and somewhat more reliably) get IP connectivity for the rpi when I'm near one of these machines. I plan to configure a classic Raspberry Pi as a mobile Bluetooth access point. It would essentially function as a Bluetooth to ethernet adapter, and should allow me to have network connectivity in places where I don't want to use my phone.
BlueZ 5 and PAN It was a bit challenging to figure out how to actually configure Bluetooth PAN with BlueZ 5. I found the bt-pan python script (see below) to be the only way so far to configure PAN without a GUI. It handles both ends of a PAN network, configuring a server and a client. Once instructed to do so (via D-Bus) in client mode, BlueZ will create a new network device - bnep0 - once a connection to a server has been established. Typically, DHCP is used to assign IP addresses for these interfaces. In server mode, BlueZ needs to know the name of a bridge device to which it can add a slave device for each incoming client connection. Configuring an address for the bridge device, as well as running a DHCP server + IP Masquerading on the bridge, is usually all you need to do.
A Bluetooth PAN Access Point with Systemd I'm using systemd-networkd to configure the bridge device. /etc/systemd/network/pan.netdev:
[NetDev]
Name=pan
Kind=bridge
ForwardDelaySec=0
/etc/systemd/network/pan.network:
[Match]
Name=pan
[Network]
Address=0.0.0.0/24
DHCPServer=yes
IPMasquerade=yes
Now, BlueZ needs to be told to configure a NAP profile. To my surprise, there seems to be no way to do this with the stock BlueZ 5.36 utilities. Please correct me if I'm wrong. Luckily, I found a very nice blog post, as well as an accompanying Python script, that performs the required D-Bus calls. For convenience, I use a Systemd service to invoke the script and to ensure that its dependencies are met. /etc/systemd/system/pan.service:
[Unit]
Description=Bluetooth Personal Area Network
After=bluetooth.service systemd-networkd.service
Requires=systemd-networkd.service
PartOf=bluetooth.service
[Service]
Type=notify
ExecStart=/usr/local/sbin/pan
[Install]
WantedBy=bluetooth.target
/usr/local/sbin/pan:
#!/bin/sh
# Ugly hack to work around #787480
iptables -F
iptables -t nat -F
iptables -t mangle -F
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
exec /usr/local/sbin/bt-pan --systemd --debug server pan
This last file wouldn't be necessary if IPMasquerade= were supported in Debian right now (see #787480). After the obligatory systemctl daemon-reload and systemctl restart systemd-networkd, you can start your Bluetooth Personal Area Network with systemctl start pan.
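Spelled out as a command sequence (the enable step is my addition, not from the original post; it works because pan.service carries an [Install] section above):
systemctl daemon-reload
systemctl restart systemd-networkd
systemctl enable pan.service    # also bring the access point up at boot
systemctl start pan.service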
Bluetooth PAN Client with Systemd Configuring the client is also quite easy to do with Systemd. /etc/systemd/network/pan-client.network:
[Match]
Name=bnep*
[Network]
DHCP=yes
/etc/systemd/system/pan@.service:
[Unit]
Description=Bluetooth Personal Area Network client
[Service]
Type=notify
ExecStart=/usr/local/sbin/bt-pan --debug --systemd client %I --wait
Now, after the usual configuration reloading, you should be able to connect to a specific Bluetooth access point with:
systemctl start pan@00:11:22:33:44:55
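To check that the client actually came up, something along these lines should work (assuming a systemd recent enough to ship networkctl; the MAC address is the placeholder from above):
systemctl status pan@00:11:22:33:44:55
networkctl status bnep0             # should show a DHCP-assigned address on the bnep0 link
ping -c 3 <access-point-address>    # or any host reachable through the access point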
Pairing via the Command Line Of course, the server and client-side service configuration require a pre-existing pairing between the server and each of its clients. On the server, start bluetoothctl and issue the following commands:
power on
agent on
default-agent
scan on
scan off
pair XX:XX:XX:XX:XX:XX
trust XX:XX:XX:XX:XX:XX
Once you've set scan mode to on, wait a few seconds until you see the device you're looking for scroll by. Note its device address, and use it for the pair and (optional) trust commands. On the client, the sequence is essentially the same except that you don't need to issue the trust command. The server needs to trust a client in order to accept NAP profile connections from it without waiting for manual confirmation by the user. I'm actually not sure if this is the optimal sequence of commands. It might be enough to just pair the client with the server and issue the trust command on the server, but I haven't tried this yet.
Enabling Use of the Bluetooth HID Profile Essentially the same as above also needs to be done in order to use the Bluetooth HID profile of the Active Star 40 on Linux. However, instead of agent on, you need to issue the command agent KeyboardOnly. This explicitly tells bluetoothctl that you're specifically looking for a HID profile.
Configuring Bluetooth via the Command Line Feels Vague While I'm very happy that I actually managed to set all of this up, I must admit that the command-line interface to BlueZ feels a bit incomplete and confusing. I initially thought that agents were only for PIN code entry. Now that I've discovered that "agent KeyboardOnly" is used to enable the HID profile, I'm not sure anymore. I'm surprised that I needed to grab a script from a random git repository in order to be able to set up PAN. I remember, with earlier versions of BlueZ, that there was a tool called pand that you could use to do all of this from the command line. I don't seem to see anything like that for BlueZ 5 anymore. Maybe I'm missing something obvious?
Performance The data rate is roughly 120kB/s, which I consider acceptable for such a low power solution. The 1GHz ARM CPU actually feels sufficiently fast for a console/text-mode person like me. I'll rarely be using much more than ssh and emacs on it anyway.
Console fonts and screen dimensions The default dimensions of the framebuffer on the Raspberry Pi Zero are a bit strange. fbset reports that the screen dimension is 656x416 pixels (with, of course, no monitor connected). With a typical console font of 8x16, I got 82 columns and 26 lines. With a 40-cell braille display, the 82 columns are very inconvenient. Additionally, as a braille user, I would like to be able to view Unicode braille characters in addition to the normal charset on the console. Fortunately, Linux supports 512 glyphs, while most console fonts only provide 256. console-setup can load and combine two 256-glyph fonts at once. So I added the following to /etc/default/console-setup to make the text console a lot more friendly to braille users:
SCREEN_WIDTH=80
SCREEN_HEIGHT=25
FONT="Lat15-Terminus16.psf.gz brl-16x8.psf"

Note

You need console-braille installed for brl-16x8.psf to be available.
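For what it's worth, a minimal sketch of putting that into place on the running system (the package name comes from the note above; setupcon is the tool shipped with console-setup for re-applying its configuration):
apt-get install console-braille      # provides brl-16x8.psf
# edit /etc/default/console-setup as shown above, then re-apply it:
setupcon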

Further Projects There's a 3.5mm audio jack inside the braille display as well. Unfortunately, there are no converters from Mini-HDMI to 3.5mm audio that I know of. It would be very nice to be able to use the sound card that is already built into the Raspberry Pi Zero, but, unfortunately, this doesn't seem possible at the moment. Alternatively, I'm looking at using a Micro USB OTG hub and an additional USB audio adapter to get sound from the Raspberry Pi Zero to the braille display's speakers. Unfortunately, the two USB audio adapters I've tried so far have run hot for some unknown reason. So I have to find some other chipset to see if the problem goes away. A little nuisance, currently, is that you need to manually power off the Raspberry, wait a few seconds, and then power down the braille display. Turning the braille display off cuts power delivery via the internal USB port. If this is accidentally done too soon then the Raspberry Pi Zero is shut down ungracefully (which is probably not the best way to do it). We're looking into connecting a small, buffering battery to the GPIO pins of the rpi, and into notifying the rpi when external power has dropped. A graceful, software-initiated shutdown can then be performed. You can think of it as being like a mini UPS for Micro USB.
The image If you are a happy owner of a Handy Tech Active Star 40 and would like to do something similar, I am happy to share my current (Raspbian Stretch based) image. In fact, if there is enough interest by other blind users, we might even consider putting a kit together that makes it as easy as possible for you to get started. Let me know if this could be of interest to you.
Thanks Thanks to Dave Mielke for reviewing the text of this posting. Thanks to Simon Kainz for making the photos for this article. And I owe a big thank you to my co-workers at Graz University of Technology who have helped me a lot to bootstrap really quickly into the rpi world.
P.S. My first tweet about this topic is just five days ago, and apart from the soundcard not working yet, I feel like the project is already almost complete! By the way, I am editing the final version of this blog posting from my newly created monitorless ARM-based Linux laptop via an ssh connection to my home machine.

10 April 2016

Russ Allbery: Largish haul

Let's see if I can scrounge through all of my now-organized directories of ebooks and figure out what I haven't recorded here yet. At least the paper books make that relatively easy, since I don't shelve them until I post them. (Yeah, yeah, I should actually make a database.) Hugh Aldersey-Williams Periodic Tales (nonfiction)
Sandra Ulbrich Almazan SF Women A-Z (nonfiction)
Radley Balko Rise of the Warrior Cop (nonfiction)
Peter V. Brett The Warded Man (sff)
Lois McMaster Bujold Gentleman Jole and the Red Queen (sff)
Fred Clark The Anti-Christ Handbook Vol. 2 (nonfiction)
Dave Duncan West of January (sff)
Karl Fogel Producing Open Source Software (nonfiction)
Philip Gourevitch We Wish to Inform You That Tomorrow We Will Be Killed With Our Families (nonfiction)
Andrew Groen Empires of EVE (nonfiction)
John Harris @ Play (nonfiction)
David Hellman & Tevis Thompson Second Quest (graphic novel)
M.C.A. Hogarth Earthrise (sff)
S.L. Huang An Examination of Collegial Dynamics... (sff)
S.L. Huang & Kurt Hunt Up and Coming (sff anthology)
Kameron Hurley Infidel (sff)
Kevin Jackson-Mead & J. Robinson Wheeler IF Theory Reader (nonfiction)
Rosemary Kirstein The Lost Steersman (sff)
Rosemary Kirstein The Language of Power (sff)
Merritt Kopas Videogames for Humans (nonfiction)
Alisa Krasnostein & Alexandra Pierce (ed.) Letters to Tiptree (nonfiction)
Mathew Kumar Exp. Negatives (nonfiction)
Ken Liu The Grace of Kings (sff)
Susan MacGregor The Tattooed Witch (sff)
Helen Marshall Gifts for the One Who Comes After (sff collection)
Jack McDevitt Coming Home (sff)
Seanan McGuire A Red-Rose Chain (sff)
Seanan McGuire Velveteen vs. The Multiverse (sff)
Seanan McGuire The Winter Long (sff)
Marc Miller Agent of the Imperium (sff)
Randal Munroe Thing Explainer (graphic nonfiction)
Marguerite Reed Archangel (sff)
J.K. Rowling Harry Potter: The Complete Collection (sff)
K.J. Russell Tides of Possibility (sff anthology)
Robert J. Sawyer Starplex (sff)
Bruce Schneier Secrets & Lies (nonfiction)
Mike Selinker (ed.) The Kobold Guide to Board Game Design (nonfiction)
Douglas Smith Chimerascope (sff collection)
Jonathan Strahan Fearsome Journeys (sff anthology)
Nick Suttner Shadow of the Colossus (nonfiction)
Aaron Swartz The Boy Who Could Change the World (essays)
Caitlin Sweet The Pattern Scars (sff)
John Szczepaniak The Untold History of Japanese Game Developers I (nonfiction)
John Szczepaniak The Untold History of Japanese Game Developers II (nonfiction)
Jeffrey Toobin The Run of His Life (nonfiction)
Hayden Trenholm Blood and Water (sff anthology)
Coen Teulings & Richard Baldwin (ed.) Secular Stagnation (nonfiction)
Ursula Vernon Book of the Wombat 2015 (graphic nonfiction)
Ursula Vernon Digger (graphic novel) Phew, that was a ton of stuff. A bunch of these were from two large StoryBundle bundles, which is a great source of cheap DRM-free ebooks, although still rather hit and miss. There's a lot of just fairly random stuff that's been accumulating for a while, even though I've not had a chance to read very much. Vacation upcoming, which will be a nice time to catch up on reading.

12 January 2016

Bits from Debian: New Debian Developers and Maintainers (November and December 2015)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

29 December 2015

Daniel Pocock: Real-Time Communication in FOSDEM 2016 main track

FOSDEM is nearly here and Real-Time Communications is back with a bang. Whether you are keen on finding the perfect privacy solution, innovative new features or just improving the efficiency of existing telephony, you will find plenty of opportunities at FOSDEM. Main track Saturday, 30 January, 17:00: Dave Neary presents How to run a telco on free software. This session is of interest to anybody building or running a telco-like service or any system administrator keen to look at a practical application of cloud computing with OpenStack. Sunday, 31 January, 10:00 is my own presentation on Free Communications with Free Software. This session looks at the state of free communications, especially open standards like SIP, XMPP and WebRTC and practical solutions like DruCall (for Drupal), Lumicall (for Android) and much more. Sunday, 31 January, 11:00: Guillaume Roguez and Adrien Béraud from Savoir-faire Linux present Building a peer-to-peer network for Real-Time Communication. They explain how their Ring solution, based on OpenDHT, can provide a true peer-to-peer solution. And much, much more...
  • XMPP Summit 19 is on January 28 and 29, the Thursday and Friday before FOSDEM as part of the FOSDEM Fringe.
  • The FOSDEM Beer Night on Friday, 29 January provides a unique opportunity for Real-Time Communication without software
  • The Real-Time Lounge will operate in the K building over both days of FOSDEM, come and meet the developers of your favourite RTC projects
  • The Real-Time dev-room is the successor of the previous XMPP and Telephony dev-rooms. The Real-Time dev-room is in K.3.401 and the schedule will be announced shortly.
Volunteers and sponsors still needed Please come and join the FreeRTC mailing list to find out more about ways to participate, the Saturday night dinner and other opportunities. The FOSDEM team is still fundraising. If your company derives benefit from free software and events like FOSDEM, please see the sponsorship pages.

11 November 2015

Bits from Debian: New Debian Developers and Maintainers (September and October 2015)

The following contributors got their Debian Developer accounts in the last two months: The following contributors were added as Debian Maintainers in the last two months: Congratulations!

4 November 2015

Vincent Sanders: I am not a number, I am a free man

Once more the NetSurf developers tried to escape from a mysterious village by writing web browser code.

Michael Drake, Daniel Silverstone, Dave Higton and Vincent Sanders at NetSurf Developer workshop
The sixth developer workshop was an opportunity for us to gather together in person to contribute to NetSurf.

We were hosted by Codethink in their Manchester offices which provided a comfortable and pleasant space to work in.

Four developers managed to attend in person from around the UK: Michael Drake, Daniel Silverstone, Dave Higton and Vincent Sanders.

The main focus of the weekend's activities was to work on improving our JavaScript implementation. At the previous workshop we had laid the groundwork for a shift to the Duktape JavaScript engine and since then have put several hundred hours of time into completing this transition.

During this weekend Daniel built upon this previous work and managed to get DOM events working. This was a major missing piece of implementation which will mean NetSurf will be capable of interpreting JavaScript based web content in a more complete fashion. This work revealed several issues with our DOM library which were also resolved.

We were also able to merge several improvements provided by the Duktape upstream maintainer Sami Vaarala which addressed performance problems with regular expressions which were causing reports of "hangs" on slow processors.

The responsiveness of Sami and the Duktape project has been a pleasant surprise, making our switch to the library look like an increasingly worthwhile effort.

Overall some good solid progress was made on JavaScript support. Around half of the DOM interfaces in the specifications have now been implemented leaving around fifteen hundred methods and properties remaining. There is an aim to have this under the thousand mark before the new year which should result in a generally useful implementation of the basic interfaces.

Once the DOM interfaces have been addressed our focus will move onto the dynamic layout engine necessary to allow rendering of the changing content.

The 3.4 release is proposed to occur sometime early in the new year and depends on getting the JavaScript work to a suitable stable state.

Dave joined us for the first time; he was principally concerned with dealing with bugs and the bug tracker. It was agreeable to have a new face at the meeting and some enthusiasm for the RISC OS port, which has been lacking an active maintainer for some time.

The turnout for this workshop was the same as the previous one, and the issues raised then are still true. We still have a very small active core team who can commit only limited time, which makes progress very slow, and we are lacking significant maintenance for several frontends.

Overall we managed to pack 16 hours of work into the weekend and addressed several significant problems.

24 September 2015

Petter Reinholdtsen: The life and death of a laptop battery

When I get a new laptop, the battery life time at the start is OK. But this does not last. The last few laptops gave me the feeling that within a year, the life time is just a fraction of what it used to be, and it slowly becomes painful to use the laptop without power connected all the time. Because of this, when I got a new Thinkpad X230 laptop about two years ago, I decided to monitor its battery state to have more hard facts when the battery started to fail. First I tried to find a sensible Debian package to record the battery status, assuming that this must be a problem already handled by someone else. I found battery-stats, which collects statistics from the battery, but it was completely broken. I sent a few suggestions to the maintainer, but decided to write my own collector as a shell script while I waited for feedback from him. Via a blog post about the battery development on a MacBook Air I also discovered batlog, which is not available in Debian. I started my collector 2013-07-15, and it has been collecting battery stats ever since. Now my /var/log/hjemmenett-battery-status.log file contains around 115,000 measurements, from the time the battery was working great until now, when it is unable to charge above 7% of its original capacity. My collector shell script is quite simple and looks like this:
#!/bin/sh
# Inspired by
# http://www.ifweassume.com/2013/08/the-de-evolution-of-my-laptop-battery.html
# See also
# http://blog.sleeplessbeastie.eu/2013/01/02/debian-how-to-monitor-battery-capacity/
logfile=/var/log/hjemmenett-battery-status.log
files="manufacturer model_name technology serial_number \
    energy_full energy_full_design energy_now cycle_count status"
if [ ! -e "$logfile" ] ; then
    (
	printf "timestamp,"
	for f in $files; do
	    printf "%s," $f
	done
	echo
    ) > "$logfile"
fi
log_battery() {
    # Print complete message in one echo call, to avoid race condition
    # when several log processes run in parallel.
    msg=$(printf "%s," $(date +%s); \
	for f in $files; do \
	    printf "%s," $(cat $f); \
	done)
    echo "$msg"
}
cd /sys/class/power_supply
for bat in BAT*; do
    (cd $bat && log_battery >> "$logfile")
done
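The description below mentions that a value is also collected every 10 minutes; a minimal sketch of wiring that up with cron (the install path and file names here are my own hypothetical choices, not from the post or the package):
# /etc/cron.d/battery-status -- assumes the collector above was saved as
# /usr/local/sbin/hjemmenett-battery-status and made executable
*/10 * * * * root /usr/local/sbin/hjemmenett-battery-status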
The script is called when the power management system detects a change in the power status (power plug in or out), and when going into and out of hibernation and suspend. In addition, it collects a value every 10 minutes. This makes it possible for me to know when the battery is discharging or charging, and how the maximum charge changes over time. The code for the Debian package is now available on github. The collected log file looks like this:
timestamp,manufacturer,model_name,technology,serial_number,energy_full,energy_full_design,energy_now,cycle_count,status,
1376591133,LGC,45N1025,Li-ion,974,62800000,62160000,39050000,0,Discharging,
[...]
1443090528,LGC,45N1025,Li-ion,974,4900000,62160000,4900000,0,Full,
1443090601,LGC,45N1025,Li-ion,974,4900000,62160000,4900000,0,Full,
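Not part of the original post, but to get a quick feel for the decay without a full graphing tool, the CSV can be reduced to one date and percentage-of-design-capacity pair per sample; a minimal sketch assuming GNU awk (for strftime) and the column layout shown above:
gawk -F, 'NR > 1 && $7 > 0 {
    # $1 = timestamp, $6 = energy_full, $7 = energy_full_design
    printf "%s %.1f%%\n", strftime("%Y-%m-%d", $1), 100 * $6 / $7
}' /var/log/hjemmenett-battery-status.log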
I wrote a small script to create a graph of the charge development over time. The graph depicted above shows the slow death of my laptop battery. But why is this happening? Why are my laptop batteries always dying in a year or two, while the batteries of space probes and satellites keep working year after year? If we are to believe Battery University, the cause is me charging the battery whenever I have a chance, and the fix is to not charge the Lithium-ion batteries to 100% all the time, but to stay below 90% of full charge most of the time. I've been told that the Tesla electric cars limit the charge of their batteries to 80%, with the option to charge to 100% when preparing for a longer trip (not that I would want a car like a Tesla, where the right to privacy is abandoned, but that is another story), which I guess is the option we should have for laptops on Linux too. Is there a good and generic way with Linux to tell the battery to stop charging at 80%, unless requested to charge to 100% once in preparation for a longer trip? I found one recipe on askubuntu for Ubuntu to limit charging on a Thinkpad to 80%, but could not get it to work (the kernel module refused to load). I wonder why the battery capacity was reported to be more than 100% at the start. I also wonder why the "full capacity" increases sometimes, and if it is possible to repeat the process to get the battery back to design capacity. And I wonder if the discharge and charge speed change over time, or if they stay the same. I did not yet try to write a tool to calculate the derivative values of the battery level, but suspect some interesting insights might be learned from those. Update 2015-09-24: I got a tip to install the acpi-call-dkms and tlp packages (unfortunately missing in Debian stable) instead of the tp-smapi-dkms package I had tried to use initially, and use 'tlp setcharge 40 80' to change when charging starts and stops. I've done so now, but expect my existing battery is toast and needs to be replaced. The proposal is unfortunately Thinkpad specific.

7 September 2015

Ben Armstrong: Hike at Blomidon Park: Late Summer, 2015

I had the wonderful privilege to go camping and hiking with my kids' scouting group, the Pathfinders of Tantallon SDA church. The day started with a quick trip to Pugwash with one of the leaders to bring back some chairs to their school, and then we headed back out to Blomidon to meet up with the group. Click the photo below to start the slideshow.
[Slideshow cover photo: The road trip started early to fetch some chairs - click to start]
Slideshow captions: The smudgy truck windows make an interesting filter · Still an hour or more away from our first stop · More funky filtering, this time with trees participating · Wentworth valley taken over the cluttered dash · We disturbed a great blue heron's breakfast at Wallace · The beach at Pugwash SDA Camp where we loaded the chairs · Trucking along past Truro · After dropping off chairs, finally approaching Blomidon · Getting very close to Blomidon · Some bikers out to enjoy the views · Interesting white berries · Interesting red berries · My first up close look at the point with my hiking buddy, Dave, on the first day · An experimental panorama. Not sure I have the knack for keeping the horizon straight. · We must bring the group out here tomorrow! · Fast ringneck snake! Hard to get a clear shot · Pre-dawn over the campground · The first blush of coming dawn · The Moon and Venus just before dawn · Evergreens surrounding our camp site, pre-dawn · Seems I'm still the only one up · Half of the tents on the spacious group site · Half of the tents on the spacious group site · My daughter, the artist · My two youngest and their best friend · Some relaxing down time after breakfast · Not sure who said what, but apparently they were hilarious. :) · The smoke was a bit much for my eldest · Geoff entertaining the troops · Dave making breakfast · Dave making breakfast · Breakfast just wrapping up · Relaxing while we finish breakfast · Looks like that needs some tweaking · A bit too smoky · The whole group · The whole group · Just goofing around · Who's winning? · Enjoying the last embers of the breakfast fire before heading to Jodrey Trail · I admire this young lady's great eye for photography · She has some sweet gear · A lot of old hardwoods out here · Dave did this hike with me yesterday · Excellent hiking buddy! · Words can't describe how much more stunning these views are in person · All the cameras came out · Got to get that perfect shot! · A tree clinging to the eroding ground above the sheer cliff · A lookoff on Jodrey Trail · A lookoff on Jodrey Trail · Lining up her shot · A lookoff on Jodrey Trail · A lookoff on Jodrey Trail · A lookoff on Jodrey Trail · A fern with sharply serrated sturdy leaves I'm not familiar with · Breaking camp at group site 404 · One final chance to enjoy the view from the park entrance before heading home

11 June 2015

John Goerzen: Roundup of remote encrypted deduplicated backups in Linux

Since I wrote last about Linux backup tools, back in a 2008 article about BackupPC and similar tools and a 2011 article about deduplicating filesystems, I've revisited my personal backup strategy a bit. I still use ZFS, with my tool simplesnap that I wrote about in 2014 to perform local backups to USB drives, which get rotated offsite periodically. This has the advantage of being very fast and very secure, but I also wanted offsite backups over the Internet. I began compiling criteria, which ran like this:

So, how did things stack up?

Didn't meet criteria
A lot of popular tools didn't meet the criteria. Here are some that I considered:

Obnam and Attic/Borg Backup
Obnam and Attic (and its fork Borg Backup) are both programs that have a similar concept at their heart, which is roughly this: the backup repository stores small chunks of data, indexed by a checksum. Directory trees are composed of files that are assembled out of lists of chunks, so if any given file matches another file already in the repository somewhere, the added cost is just a small amount of metadata. Obnam was eventually my tool of choice. It has built-in support for sftp, but its reliance on local filesystem semantics is very conservative and it works fine atop davfs2 (and, I'd imagine, other S3-backed FUSE filesystems). Obnam's repository format is carefully documented and it is very conservatively designed through and through, clearly optimized for integrity above all else, including speed. Just what a backup program should be. It has a lot of configurable options, including chunk size, caching information (dedup tables can be RAM-hungry), etc. These default to fairly conservative values, and the performance of Obnam can be significantly improved with a few simple config tweaks. Attic was also a leading contender. It has a few advantages over Obnam, actually. One is that it uses an rsync-like rolling checksum method. This means that if you add 1 byte at the beginning of a 100MB file, Attic will upload a 1-byte chunk and then reference the other chunks after that, while Obnam will have to re-upload the entire file, since its chunks start at the beginning of the file in fixed sizes. (The only time Obnam has chunks smaller than its configured chunk size is with very small files or the last chunk in a file.) Another nice feature of Attic is its use of "packs", where it groups chunks together into larger pack files. This can have significant performance advantages when backing up small files, especially over high-latency protocols and links. On the downside, Attic has a hardcoded, fairly small chunk size that gives it a heavy metadata load. It is not at all as configurable as Obnam, and unlike Obnam, there is nothing you can do about this. The biggest reason I avoided it, though, was that it uses a single monolithic index file that would have to be uploaded from scratch after each backup. I calculated that this would be many GB in size, if not even tens of GB, for my intended use, and this is just not practical over the Internet. Attic assumes that if you are going remote, you run Attic on the remote so that the rewrite of this file doesn't have to send all the data across the network. Although it does work atop davfs2, this support seemed like an afterthought and is clearly not very practical. Attic did perform much better than Obnam in some ways, largely thanks to its pack support, but the monolithic index file was going to make it simply impractical to use.
There is a new fork of Attic called Borg that may, in the future, address some of these issues.

Brief honorable mentions: bup, zbackup, syncany
There are a few other backup tools that people are talking about which do dedup. bup is frequently mentioned, but one big problem with it is that it has no way to delete old data! In other words, it is more of an archive than a backup tool. zbackup is a really neat idea: it dedups anything you feed it, such as a tar stream or zfs send stream, and can encrypt, too. But it doesn't (yet) support removing old data either. syncany is fundamentally a syncing tool, but can also be used from the command line to do periodic syncs to a remote. It supports encryption, sftp, webdav, etc. natively, and runs on quite a number of platforms easily. However, it doesn't store a number of POSIX attributes, such as hard links, uid/gid owner, ACL, xattr, etc. This makes it impractical for use for even backing up my home directory; I make fairly frequent use of ln, both with and without -s. If there were some tool to create/restore archives of metadata, that might work out better.

28 May 2015

Sven Hoexter: RMS, free software and where I fail the goal

You might have already read this comment by RMS in the Guardian. That comment and a recent discussion about the relevance of GPL changes post GPLv2 made me think again about the battle RMS started to fight. While some think RMS should "retire", at least I still fail on my personal goal to not depend on non-free software and services. So for me this battle is far from over, and here is my personal list of "non-free debt" I've to pay off.

general purpose systems aka your computer
Looking at the increasing list of firmware blobs required to use a GPU, wireless chipsets and more and more wired NICs, the situation seems to be worse than in the late 90s. Back then the primary issue was finding supported hardware, but the driver was free. Nowadays even the open sourced firmware often requires obscure patched compilers to build. If I look at this stuff I think the OpenBSD project got that right with the more radical position. Oh, and then there is CPU microcode. I'm not yet sure what to think about it, but in the end it's software and it's not open source. So it's non-free software running on my system. Maybe my memory is blurred due to the fact that the separation of firmware from the Linux kernel, and proper firmware loading, got implemented only years later. I remember the discussion about the pwc driver and its removal from Linux. Maybe the situation wasn't better at that time but the firmware was just hidden inside the Linux driver code? On my system at work I've to add the Flash plugin to the list due to my latest test with Prezi, which I'll touch on later. I also own a few Humble Indie bundles. I played parts of Osmos after a recommendation by Joey Hess, I later played through Limbo, and I got pretty far with Machinarium on a Windows system I still had at that time. I also tried a few others but never got far or soon lost interest. Another thing I can not really get rid of is unrar, because of stuff I need to pull from xda-developer links just to keep a cell phone running. Update: Josh Triplett pointed out that there is unar available in the Debian archive. And indeed that one works on the rar file I just extracted.

Android ecosystem
I will soon get rid of a stock S3 mini and try to replace it with a moto g loaded with CyanogenMod. That leaves me with a working phone with an OS that just works because of a shitload of non-free blobs. The time and work required to get there is another story. Among others you need a new bootloader that requires a newer fastboot compared to what we have in Jessie, and later you also need the newer adb to be able to sideload the CM image. There I gave in and just downloaded the pre-built SDK from Google. And there you've another binary I did not even try to build from source. Same for the CM image itself, though that's not that much different from using a GNU/Linux distribution if you ignore the trust issues. It's hard to trust the phone I've built that way, but it's the best I can get at the moment with at least some bigger chunks of free software inside. So let's move to the applications on the phone. I do not use GooglePlay, so I rely on f-droid and freeware I can download directly from the vendor.

"Cloud" services
This category mixes a lot with the stuff listed above; most of them are not only an application, in fact Threema and Wunderlist are useless without the backend service. And Opera is just degraded to a browser - and to be replaced with Firefox - if you discount the compression proxy. The other big addition in this category is Prezi.
We tried it out at work after it got into my focus due to a post by Dave Aitel. It's kind of the poster child of non-freeness. It requires a non-free, unstable, insecure and halfway deprecated browser plugin to work, you can not download your result in a useful format, you've to buy storage for your presentation at this one vendor, you've to pay if you want to keep your presentation private. It's the perfect lock-in situation. But still it's very convenient, prevents a lot of common mistakes you can make when you create a presentation, and they invented a new concept of presenting. I know about impress.js (hosted on a non-free platform by the way, but at least you can export it from there) and I also know about hovercraft. I'm impressed by them, but it's still not close to the ease of use of Prezi. So here you can also very prominently see the cost of free and non-free software. Invest the time and write something cool with CSS3 and impress.js, or pay Prezi to just click yourself through. To add something about the instability: I had to use a Windows laptop for presenting with Prezi because the Flash plugin on Jessie crashed in the presentation mode; I did not yet check the latest Flash update. I guess that did not make the situation worse, it already is horrible. Update: Daniel Kahn Gillmor pointed out that you can combine inkscape with sozi, though the Debian package is in desperate need of an active maintainer, see also #692989. I also use kind-of database services like duden.de and dict.cc. When I was younger you bought such things printed on dead trees, but they did not update very well. Thinking a bit further, a Certification Authority is not only questionable due to the whole trust issue, they also provide OCSP responders as kind of a web service. And I've already experienced what the internet looks like when the OCSP systems of GlobalSign failed. So there is still a lot to fight for and a lot of "personal non-free debt" to pay off.

Previous.